CAGEF_services_slide.png

Lecture 01: Basics of R and Jupyter Notebooks

R_for_data_science.png


0.1.0 About Introduction to R

Introduction to R is brought to you by the Centre for the Analysis of Genome Evolution & Function (CAGEF) bioinformatics training initiative. This course was developed based on feedback on the needs and interests of the Department of Cell & Systems Biology and the Department of Ecology and Evolutionary Biology.

The structure of this course is a code-along style; it is 100% hands-on! A few hours prior to each lecture, links to the materials will be available for download on Quercus. The teaching materials consist of a Jupyter Lab Notebook with concepts, comments, instructions, and blank spaces that you will fill out in R by coding along with the instructor. Other teaching materials include an HTML version of the notebook and datasets to import into R, when required. This learning approach lets you spend your time coding rather than taking notes!

As we go along, there will be some in-class challenge questions for you to solve either individually or in cooperation with your peers. Post-lecture assessments will also be available (see the syllabus for the grading scheme and percentages of the final mark) through DataCamp to help cement and/or extend what you learn each week.

0.1.1 Where is this course headed?

We'll take a blank-slate approach to R here and assume that you know essentially nothing about programming. From the beginning of this course to the end, we want to take you from scenarios like these:

and get you to a point where you can:

data-science-explore.png

0.1.2 How do we get there? Step-by-step.

In the first two lessons, we will talk about the basic data structures and objects in R, get cozy with the RStudio environment, and learn how to get help when you are stuck. Because everyone gets stuck - a lot! Then you will learn how to get your data in and out of R, how to tidy our data (data wrangling), subset and merge data, and generate descriptive statistics. Next will be data cleaning and string manipulation; this is really the battleground of coding - getting your data into the format where you can analyse it. After that, we will make all sorts of plots for both data exploration and publication. Lastly, we will learn to write customized functions and apply more advanced statistical tests, which really can save you time and help scale up your analyses.

Draw_an_Owl-2.jpg

The structure of the class is a code-along style: it is fully hands-on. At the end of each lecture, complete notes will be made available in PDF format through the corresponding Quercus module, so you don't have to spend your attention on taking notes.


0.2.0 Class Objectives

This is the first in a series of seven lectures. At the end of this session you will be familiar with the Jupyter Notebook environment and the R-kernel associated with it. You will know about basic data structures in R and how to create them. You will be able to install and load packages. Our topics are broken into:

  1. Familiarizing yourself with Jupyter Notebooks and the R-kernel
  2. Getting started with programming
  3. Data types in R
  4. Installing and importing libraries
  5. Reading and writing files

Use: these concepts underpin coding best practices and help you understand the data types you will be analyzing.


0.3.0 A legend for text format in Jupyter markdown

Blue box: A key concept that is being introduced
Yellow box: Risk or caution
Green box: Recommended reads and resources to learn R

0.4.0 Lecture and data files used in this course

0.4.1 Weekly Lecture and skeleton files

Each week, new lesson files will appear within your JupyterHub folders. We are pulling from a GitHub repository using this Repository git-pull link. Simply click on the link and it will take you to the University of Toronto JupyterHub. You will need to use your UTORid credentials to complete the login process. From there you will find each week's lecture files in the directory /2021-09-IntroR/Lecture_XX. You will find a partially coded skeleton.ipynb file as well as all of the data files necessary to run the week's lecture.

Alternatively, you can download the Jupyter Notebook (.ipynb) and data files from JupyterHub to your personal computer if you would like to run independently of the JupyterHub.

0.4.2 Live-coding HTML page

A live lecture version will be available at camok.github.io that will update as the lecture progresses. Be sure to refresh to take a look if you get lost!

0.4.3 Post-lecture PDFs and Recordings

As mentioned above, at the end of each lecture there will be a completed version of the lecture code released as a PDF file under the Modules section of Quercus. A recorded version of the lecture will be made available through the University's MyMedia website and a link will be posted in the Discussion section of Quercus.


1.0.0 What is R?

R is a statistical programming language first developed by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, around 1993, before becoming an open-source project in 1997. It is based on the programming language S and was named in part as an homage to this inspiration, as well as to its original developers.

While this language started as an experiment by its original authors, it soon surpassed the utility and function of its predecessor and is now one of the most powerful statistical programming languages and among the most popular data science programming languages.

1.0.1 Why learn R?

While our friend Python may be the belle of the ball for many data scientists, R was built for statistical analysis and has been extensively developed by the community to produce publication-quality visualizations. You'll find many helpful biology/data science packages built for R as well, including:

More importantly, YOU may have data or a problem in your own studies that you want to solve. The techniques and methods you'll learn in this course will be the foundation of the data science journey towards understanding your data or conquering your problem!


1.1.0 Jupyter Notebooks and the R-kernel

Your work with Jupyter Notebooks on the University of Toronto JupyterHub will all be contained within a browser tab, with the address bar showing something like https://jupyter.utoronto.ca/user/assigned-username-hexadecimal/tree/2021-09-IntroR.

All of this is running non-locally on a University of Toronto server rather than your own machine. You'll see a directory structure from your home folder:

i.e. /2021-09-IntroR/ with a Lecture_01 folder within. Clicking on that, you'll find Lecture_01.skeleton.ipynb, which is the notebook we will use for today's code-along lecture.

Jupyter.screenshot.png

1.1.1 Why is this class using Jupyter Notebooks?

We've implemented the class this way to reduce the burden of having to install various programs. While installation can be a little tricky, it's really not that bad; for this introductory course, however, you don't need to go through all of that just to learn the basics of coding.

Jupyter Notebooks also give us the option of inserting "markdown" text, much like what you're reading at this very moment. This lets us intersperse ideas and information between our demonstration code cells.

There is, however, an appendix section at the end of this lecture detailing how to install Jupyter Notebooks (and the R-kernel for them), as well as independent installation of R itself and a great integrated development environment (IDE) called RStudio. Check out section 8.0.0 for more information.


1.2.0 A quick intro to the R environment

R is a language and an environment because it has the tools and software for the storage, manipulation, statistical analysis, and graphical display of data. It comes with about 25 built-in 'packages' and uses a simple programming language (derived from S). The core information and programming that makes up R is called the kernel; we may refer to this concept interchangeably as the R-kernel or r-base. A useful resource is the "Introduction to R" manual found on CRAN.

More than just popcorn: The R-kernel interprets the human-readable code we create (syntax) to perform operations behind the scenes. By combining the available basic functions provided by the R-kernel, we can create more complex actions culminating in output from mathematical analysis to beautiful data graphs.

R_console.png


1.2.1 Packages contain useful functions

So... what is in these packages? A package can be a collection of

Functions are the basic workhorses of R; they are the tools we use to analyze our data. Each function can be thought of as a unit that has a specific task. A function (usually) takes input, evaluates it using an expression (e.g. a calculation, plot, merge, etc.), and returns an output (a single value, multiple values, a graphic, etc.).

In this course we will rely a lot on a package called tidyverse which, itself, is also dependent upon a series of other packages.

1.2.2 Useful packages are archived with CRAN and Bioconductor

Users have been encouraged to make their own packages. There are now well over 13,000 packages in R repositories (banks of packages), including over 12,000 on CRAN (the Comprehensive R Archive Network) and about 1,500 on Bioconductor.

cran_pkg.png

The "Comprehensive R Archive Network" (CRAN) is a collection of sites that carry identical copies of R and related R material:

Different sites (for example, we used http://cran.utstat.utoronto.ca/) are called mirrors because they reflect the content of the master site in Austria. There are mirrors worldwide to reduce the load on the network. We will refer to CRAN as the main repository for obtaining R packages.

Bioconductor is another repository for R packages, but it specializes in tools for high-throughput genomics data. One nice thing about Bioconductor is that it has decent vignettes. A vignette is the set of documentation for a package, explaining its functions and usages in a tutorial-like format.


1.3.0 Jupyter notebooks run programming language kernels like R

Behind the scenes of each Jupyter notebook a programming kernel is running. For instance, our notebooks run an "emulated" R-kernel to interpret each code cell as if it were written specifically for the R language.

As we move from code cell to code cell, all of the objects we have created are stored in memory. We can refer to them as we run the code and move forward, but if you overwrite or change them by mistake, you may have to rerun multiple cells!

There are some options in the "Cell" menu that can alleviate these problems such as "Run All Above". If you think you've made a big error by overwriting a key object, you can use that option to "re-initialize" all of your previous code!

Remember these friendly keys/shortcuts:

In Command mode

1.3.1 Why would you want to use a Jupyter Notebook?

Depending on your needs, you may find yourself doing the following:

Jupyter allows you to alternate between "markdown" notes and "code" that can be run or re-run on the fly.

Each data run and its results can be saved individually as a new notebook, letting you compare datasets and small changes to analyses!


1.4.0 RStudio is an integrated development environment (IDE) for R

A flagship IDE for R is RStudio. It runs the R-kernel but offers additional tools and interfaces that allow the user and programmer to see and understand their code much better than just R by itself.

RStudio simplifies some basic tasks like

1.4.1 Why would you want to use R and RStudio?

"What if I'm doing more than just running data through packages?"

As a development environment, RStudio offers features like debugging and access to environment variable states. It is a fully integrated development environment that makes it easy to look up help on packages and functions, save data states to come back to later, and work on multiple scripts that may reference each other. It also has a decent user interface that makes viewing certain objects, like tables, much easier.


1.5.0 What should I use?

I suggest you try out both! Find what's comfortable for you and experiment with whatever works best for your needs!

Personally, I use R/RStudio to generate code, but after building this class as Jupyter Notebooks, I've found they are a great tool for running smaller code snippets, especially in the context of working or talking with supervisors and collaborators. Many times, they may want to know something like

You can make quick changes on the fly and see the results there in the notebook without pulling up extra windows or programs. New runs can be saved in different versions of the notebook with quick footnotes on what has changed.

Again, consider it on a case-by-case basis...


1.6.0 Making Life Easier

Let's discuss some important behaviours before we begin coding:

1.6.1 Annotate your code with #

Why bother?

Your worst collaborator is potentially you in 6 days or 6 months. Do you remember what you had for breakfast last Tuesday?

real_programmers.jpg

You can annotate your code for selfish reasons, or altruistic reasons, but annotate your code.

How do I start?

Comments may/should appear in three places:


# At the beginning of the script, describing the purpose of your script and what you are trying to solve

bedmasAnswer <- 5 + 4 * 6 - 0 # In line: describing a part of your code whose purpose is not obvious

Maintaining well-documented code is also good for mental health!

Keyboard shortcuts in RStudio:


1.6.2 Naming conventions for files, objects, and functions

Basically, you have the following options:

The most important aspects of naming conventions are being concise and consistent!

1.6.3 Best Practices for Writing Scripts


1.7.0 Trouble-shooting basics

GooglingForProgrammers.jpg

We all run into problems. We'll see a lot of mistakes happen in class too! That's OK if we can learn from our errors and quickly (or eventually) recover.

1.7.1 Common errors

1.7.2 Beginner Advice

At this level, many people have had and solved your problem. Beginners get frustrated because they get stuck and take hours to solve a problem themselves. Set your limit, stay within it, then go online and get help.

1.7.3 Finding answers online

1.7.3.1 Asking a question

StackOverflow.png

Remember: Everyone looks for help online ALL THE TIME. It is very common. Also, with programming there are multiple ways to come up with an answer, even different packages that let you do the same thing in different ways. You will work on refining these aspects of your code as you go along in this course and in your coding career.

Last but not least, to make life easier: Under the Help pane, there is a Cheatsheet of Keyboard Shortcuts or a browser list here.


2.0.0 Using R

Remember that before we are up and running we really need to lay the foundation for this complex language. Enough theory. Let's get started!

2.1.0 Basic use of R

R can do anything your basic, scientific, or graphic calculator can.

2.1.1 Basic math and functions

2.1.2 Plot equations


2.2.0 Functions do the work for us

You may have noticed above that we did some crazy-looking stuff involving parentheses ( ). There are actually many uses for ( ) within R, and which one applies is entirely dependent upon context.

  1. Most broadly, we use ( ) to contain or separate actions and expressions. R descends from a much older programming language but, in a nutshell, everything is evaluated from the innermost ( ) to the outermost set of ( ).

  2. A secondary purpose of ( ) is to indicate to R that you would like to activate a function by passing the contents of ( ) to the pre-existing function. This takes the form of

    function_name(argument_1 = parameter_1, argument_2 = parameter_2, ..., argument_n = parameter_n).

    or more simply

    function_name(parameter_1, parameter_2, ..., parameter_n), but parameter order in this case is quite important.

We'll talk about the structure of functions in more detail as the course progresses BUT know that

  1. functions are used to perform common operations that may combine multiple actions or calculations.
  2. when functions are programmed or defined, they use arguments as a way to retrieve the input or information needed to perform their jobs.
  3. functions may or may not return a value upon their completion.

2.2.1 Use the help() function or ? to learn more about functions

Often you may forget the simple or complicated requirements of a function, but you can use ? or help(function_name) to retrieve a description of the function, including descriptions of its input arguments and of the output (if any) that is returned.
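For example, using the built-in mean() function (any function name works the same way):

```r
?mean          # opens the help page for mean()
help("mean")   # identical to the ? shorthand
args(mean)     # prints just the function's argument list
```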


3.0.0 A [quick] intro to R's variables, data types, and data structures

3.1.0 Assigning variables

Up until now we've simply been calculating with R, and the output appears after the code cell; nothing is left behind in the R interpreter or its memory. If we want to hold onto a number or calculation, we need to assign it to a named variable. In fact, R has multiple methods for assigning a value to a variable, and an order of precedence!

-> and ->> Rightward assignment: we won't really be using this in our course.

<- and <<- Leftward assignment: assignment used by most 'authentic' R programmers but really just a historical throwback.

= Leftward assignment: commonly used token for assignment in many other programming languages but carries dual meaning!

Notes

Let's try some exercises.
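A minimal sketch of the three assignment styles (the variable names are arbitrary):

```r
x <- 5      # leftward assignment: the conventional R style
10 -> y     # rightward assignment: works, but rarely used
z = x + y   # '=' also assigns at the top level, but inside function
            # calls it instead names arguments, hence the dual meaning
z           # 15
```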

3.1.1 Blank spaces are usually ignored by the interpreter

Spaces separate commands and variables as the code is parsed, but the total number of spaces is irrelevant to the interpreter when it runs your code.

Let's see it in action
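For instance (a trivial, hypothetical calculation):

```r
# These lines are equivalent; the interpreter ignores extra spaces
a <- 2+3
b <-   2   +   3
a == b    # TRUE

# One caution: never split the assignment arrow itself.
# 'a < - 3' is parsed as the comparison a < (-3), not an assignment!
```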

A more complex example

for (i in unique(raw_area_long$date)){
sub.dat<-subset(raw_area_long, raw_area_long$date==i)
umoles<-(((sub.dat$area[sub.dat$date==i]-calib_coeff$intercept[calib_coeff$date==i])/calib_coeff$slope[calib_coeff$date==i])/1000)*sub.dat$headspace_mL[sub.dat$date==i]}

Using spaces to organize the above code, we can clarify what's happening! Notice we even use indentation to help sort out the flow of our code. We'll talk more about that in detail in lecture 07.

for (i in unique(raw_area_long$date)) {
  sub.dat <- subset(raw_area_long, raw_area_long$date == i)
  umoles <- (((sub.dat$area[sub.dat$date == i] -
               calib_coeff$intercept[calib_coeff$date == i]) /
              calib_coeff$slope[calib_coeff$date == i]) / 1000) *
    sub.dat$headspace_mL[sub.dat$date == i]
}

3.1.1.1 A special case where blankspace matters

Under some special circumstances, spaces are required, e.g. when using the function paste() and its argument sep = ' '.


3.1.2 R calculates from the right side first before (leftward) assignment

R calculates the right side of the assignment first; the result is then applied to the left. This is a common paradigm in programming that simplifies variable behaviours for counting and tracking results as they build up over time.

This also allows us to increment variables or manipulate objects to update them!

This behaviour can be extended in a more complex fashion to encompass multiple variables
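A small sketch of incrementing and chaining variables:

```r
counter <- 0
counter <- counter + 1   # right side evaluated first, then assigned
counter <- counter + 1
counter                  # 2

total <- counter * 10    # one variable's value feeds the next
total                    # 20
```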

Remember! Variables are specific identifiers/placeholders that allow us to access our data. We can change the nature and size of that data simply by reassigning the value of the variable.
Caution! Variable names are case-sensitive. When assigning variables, try to use original, descriptive names to reduce errors in your code. You can use the Tab key to help autocomplete your code.

3.2.0 Data types are the basic building blocks of R

Data types are used to classify the basic spectrum of values that are used in R. Here's a table describing some of the common data types we'll encounter.

Data type Description Example
character Can be single or multiple characters (strings) of letters and symbols. Assigned using single ' or double " quotes a#c&E
integer Whole number values, either positive or negative 1
double Any number that is not an integer 7.5
logical Also known as a boolean, representing the state of a conditional (question) TRUE or FALSE
NA Represents the value of "Not Available" usually seen when imported data has missing values NA
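You can inspect the type of any value with typeof(); a quick sketch using the examples from the table:

```r
typeof("a#c&E")   # "character"
typeof(1L)        # "integer"  (the L suffix marks an integer)
typeof(7.5)       # "double"
typeof(TRUE)      # "logical"
is.na(NA)         # TRUE
```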

3.2.1 Data structures hold single or multiple values

The job of data structures is to "host" the different data types. There are five basic types of data structures that we'll use in R:

Data structure Dimensions Restrictions
vector 1D Holds a single data type
matrix 2D Holds a single data type
array nD Holds a single data type
data frame 2D Holds multiple data types with some restrictions
list 1D (technically) Holds multiple data types AND structures

data_structures.jpg


3.2.2 Scalars - "One is the loneliest data type"

One single value from any of the above data types. It is the smallest possible "unit" of data within R.


3.2.3 Vectors are like a queue of a single data type

There is a numerical order to a vector, much like a queue AND you can access each element (piece of data) individually or in groups.

Here are what vectors of each data 'type' would look like.

Note: character items must be in quotations while 'L' is placed next to a number to specify an integer rather than a double.
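As a sketch, one vector per data type, built with the c() ("combine") function:

```r
char_vec <- c("a", "b", "c")
int_vec  <- c(1L, 2L, 3L)       # L suffix: integer, not double
dbl_vec  <- c(1.5, 2.5, 3.5)
log_vec  <- c(TRUE, FALSE, TRUE)
typeof(int_vec)                 # "integer"
```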

3.2.3.1 Coercion changes data from one type to another (if possible)

When you mix data types in a single vector, R will coerce (force) the vector to one data type; in this case the most inclusive type is character, so we end up with a character vector. When we explicitly force a change from one data type to another, it is known as conversion or casting.
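A minimal illustration of coercion versus explicit casting:

```r
mixed <- c(1, TRUE, "two")
mixed            # "1" "TRUE" "two" -- everything coerced to character
typeof(mixed)    # "character"

as.numeric(c(TRUE, FALSE, TRUE))  # explicit casting: 1 0 1
```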

3.2.3.2 What happened to our data??

Let's highlight the above error for a couple of reasons:

  1. Keep your data types in mind. It is good practice to look at your object or the global environment to make sure the object that you just made is what you think it is.

  2. It can be useful for data analysis to be able to switch from TRUE/FALSE to 1/0, and it is pretty easy, as we have just seen.

Learn more about data-type coercion: If you're interested in learning the order of operations for coercion, you can find more information on how R handles it in R in a nutshell

3.2.3.3 Vector contents can be individually named using the names() function

Within a vector, each individual element can be assigned to a character-based name. This can act as a way to locate values based on what they represent and not by their position within the vector.
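A sketch with hypothetical data (note the function is names(), plural):

```r
ages <- c(12, 15, 9)
names(ages) <- c("Ann", "Bob", "Cam")   # hypothetical names
ages
ages["Bob"]   # look up a value by name rather than position: 15
```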

3.2.3.4 Use length() to identify the number of elements in a vector

Remember that a vector is a container for your data which you can think of as a queue of boxes where each box contains a value. We can retrieve the length of this queue using the length() function. We'll learn additional functions later that we can apply broadly to retrieve information about various objects.
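For instance:

```r
my_vector <- c(10, 20, 30, 40)   # a hypothetical four-element vector
length(my_vector)                # 4
```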

3.2.3.5 Use the [ ] indexing notation to extract values

For most data structures in R, you can use index notation to extract values from the object. To accomplish this, use the square brackets [ ], separating dimensions using a ,. You can create indices using:

These indices can be supplied singly, as a vector with c(), or as a range with start:end. Throughout the course, we may also refer to the act of indexing portions of a data structure as slicing.

Watch out! Indexing in R follows real-world arithmetic notation where vectors are represented as n-tuples indexed from 1 to n. This might be unfamiliar if you're coming from a 0-indexed system like C++, Java, or Python.
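A few sketches of [ ] indexing on a hypothetical vector:

```r
v <- c(10, 20, 30, 40, 50)
v[1]          # 10 -- the FIRST element (R counts from 1)
v[c(2, 4)]    # 20 40 -- multiple indices via c()
v[2:4]        # 20 30 40 -- a start:end range
v[-1]         # 20 30 40 50 -- negative indices drop elements
v[v > 25]     # 30 40 50 -- logical (conditional) indexing
```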

3.2.4 Matrices are 2-dimensional containers of a single data type

Thus matrices are like a 2D version of vectors. They can be accessed similarly to vectors, but in a [row, column] format.
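A minimal sketch of building and indexing a matrix:

```r
m <- matrix(1:6, nrow = 2, ncol = 3)   # filled column by column
m
m[1, 2]   # row 1, column 2: 3
```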

3.2.4.1 A reminder about functions inside functions

Recall that in R, functions within functions are read inside-out, i.e. moving from the inner most parenthesis and outwards:

matrix(c(rep(0, 10), rep(1,10)), nrow = 5, ncol = 5)

Here the two rep(...) functions will be evaluated before evaluating matrix(...)

Note that the rep(value, times) function produces a vector by repeating the parameter value by the specified parameter times.

What has happened here? Look up the matrix() function. Why has R not thrown an error? How would I make this same matrix without vector recycling? Can you think of 2 ways?

Notice how the matrices are "filled" one column at a time?

Challenge: What do you think this matrix will look like?

my_matrix <- matrix(c(0,0,0,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,1,1,0), nrow = 5, ncol = 5)

3.2.4.2 Bracket and parentheses location matters!

What happened above?

R code is evaluated inside-out, but the brackets here are poorly positioned. With the command above you end up with a single-column matrix built from the vector c(rep(0, 10), rep(1, 10), 5, 5), because the nrow and ncol values were accidentally included in the data vector.

Remember to be mindful of your bracket placement or you'll be in for some headaches!


3.2.4.3 Challenge

Make a 4 x 4 matrix that looks like this, using the seq() function at least once.

2   4   6   8
10  12  3   6
9   12  0   1
0   1   0   1

seq() produces a vector of numbers using the parameters from, to, and by, which makes the process of generating a pattern of numbers much simpler for you.
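To get a feel for seq() before attempting the challenge (these numbers are just an illustration, not the answer):

```r
seq(from = 2, to = 8, by = 2)   # 2 4 6 8
seq(0, 1, by = 0.25)            # 0.00 0.25 0.50 0.75 1.00
```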

3.2.4.4 A matrix is a 2D object

As you've noticed by now, the matrix is a 2D object so there are a few more properties and tricks to it than a simple vector. We can use a number of useful functions to gain insights about our object:

Let's try these out and see for ourselves.
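A sketch of the usual inspection functions on a hypothetical matrix:

```r
m <- matrix(1:12, nrow = 3, ncol = 4)
dim(m)    # 3 4  (rows, columns)
nrow(m)   # 3
ncol(m)   # 4
t(m)      # the transpose: a 4 x 3 matrix
str(m)    # compact summary of the object's structure
```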


3.2.4.5 Use [row, column] notation to access portions of a matrix

Recall the [ ] indexing notation from vectors can be applied to matrices as well. The major difference is the requirement to use a , even when "slicing" a matrix only by rows or columns. Leaving an empty space before or after the comma is equivalent to "all".
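For example, on a hypothetical 3 x 4 matrix:

```r
m <- matrix(1:12, nrow = 3, ncol = 4)
m[2, 3]      # the single element at row 2, column 3: 8
m[2, ]       # all of row 2 (the comma is still required)
m[, 3]       # all of column 3
m[1:2, 1:2]  # a 2 x 2 sub-matrix
```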


3.2.4.6 Subsetting a matrix returns a vector or matrix

Note that when we subset a single row or column, we end up with a vector; otherwise another matrix is returned.


3.2.5 Data Frames

3.2.5.1 Object classes

Now that we have had the opportunity to create a few different objects, let's talk about what an object class is. An object class can be thought of as determining how an object will behave in a function. Because of this

Some R package developers have created their own object classes. We won't deal with this today, but it is good to be aware of from a trouble-shooting perspective that your data may need to be formatted to fit a certain class of object when using different packages.


3.2.5.2 Data frames are groups of vectors disguised as matrices

Whereas matrices are limited to a single specific type of data within each instance, data frames are bundles of vectors, so different columns can hold different types of data. More specifically

  1. Within a column, all members must be of the same data type (i.e. character, numeric, factor, etc.)
  2. All columns must have the same number of rows (hence the matrix shape)

This object allows us to generate tables of mixed information much like an Excel spreadsheet.

To make a new data frame (instantiate it), we use the data.frame() function which takes the form of:

data.frame(column_name1 = vector1, column_name2 = vector2, ..., column_nameN = vectorN)

Many R packages have been made to work with data in data frames, and this is the class of object where we will spend most of our time.

Let's use some of the functions we have learned for finding out about the structure of our data frame.
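A sketch with hypothetical sample data:

```r
my_data_frame <- data.frame(
  sample = c("A", "B", "C"),
  count  = c(10L, 25L, 7L),
  passed = c(TRUE, FALSE, TRUE)
)
my_data_frame
str(my_data_frame)   # one line per column, showing each column's type
```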

3.2.5.3 Casting a matrix to a data frame using as.data.frame()

We can also convert between data types if they are similar enough. For example, I can convert my matrix into a data frame. Since a data frame can hold any type of data, it can hold all of the numeric data in a matrix.

3.2.5.4 You can access and rename data frame columns (like a header!) with colnames()

Notice that after converting our matrix, the column names have been automatically assigned to generic identifiers. Sometimes you may wish to rename these for whatever reason. You can even rename specific columns as you see fit.
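A minimal sketch (the matrix and column names here are hypothetical):

```r
new_data_frame <- as.data.frame(matrix(1:6, nrow = 2))
colnames(new_data_frame)                  # auto-assigned: "V1" "V2" "V3"
colnames(new_data_frame) <- c("first", "second", "third")
colnames(new_data_frame)[2] <- "renamed"  # rename just one column
colnames(new_data_frame)                  # "first" "renamed" "third"
```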

3.2.5.5 You can't always cast a data frame to a matrix as expected

Casting (like coercion) can only be accomplished if the objects or data types (within) are compatible. We can convert our new_data_frame to a matrix but what about my_data_frame?

Notice that the numeric vector is now character!
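A sketch of why this happens, using a hypothetical mixed data frame:

```r
mixed_df <- data.frame(id = c(1, 2), label = c("a", "b"))
as.matrix(mixed_df)   # a matrix holds ONE type, so the numeric
                      # column is coerced to character: "1" "2"
```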


3.2.5.6 Some useful data frame commands (for now)

nrow(new_data_frame) # retrieve the number of rows in a data frame

ncol(new_data_frame) # retrieve the number of columns in a data frame

new_data_frame$column_name # Access a specific column by its name

new_data_frame[x,y] # Access a specific element located at row x, column y

There are many more ways to access and manipulate data frames that we'll explore further down the road


3.2.6 Arrays

Arrays are n-dimensional objects that hold a single data type. It may be simpler to think of arrays as multiple matrices stacked upon one another, which explains why you are held to a single data type: arrays are just an extension of matrices, which are an extension of vectors. You might find these useful for multi-variable experiments completed in replicate; you could separate replicates, conditions, or populations into different dimensions, for instance.

To create an array, we give a vector of data to fill the array, and then the dimensions of the array. This code will recycle the vector 1:10 and fill 5 arrays that have 2 x 3 dimensions. To visualize the array, we will print it afterwards.
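A sketch of that recycling in action:

```r
my_array <- array(1:10, dim = c(2, 3, 5))  # recycle 1:10 into 5 slices
my_array[, , 1]     # the first 2 x 3 matrix "slice"
my_array[1, 2, 1]   # row 1, column 2 of slice 1: 3
dim(my_array)       # 2 3 5
```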


3.2.6.1 Access elements from an array

You can access the elements within an array much like a vector, data frame, or list using the format [row, column, matrix_number] although you could have more dimensions than just 3 so just keep separating dimensions with a ,.

MostInterestingArray.jpg

That's not entirely true, as I personally don't often use arrays per se, but I have created array-like objects with lists! I wouldn't worry about it too much, but you may encounter these objects every once in a while.

Speaking of lists...


3.2.7 Lists are amorphous bundles strung together with code

Lists can hold mixed data types of different dimensions. These are especially useful for bundling data of different types for passing around your scripts! Rather than having to call multiple variables by name, you can store them in a single list!

We use the list() function to instantiate a list. Like a vector, we can specifically name each element/object within a list. The elements of a list are also indexed in the order of their initial creation.
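A sketch with hypothetical contents:

```r
mixed_list <- list(
  counts = c(4L, 8L, 15L),      # an integer vector
  label  = "experiment_1",      # a character scalar
  flags  = c(TRUE, FALSE)       # a logical vector
)
str(mixed_list)   # 3 named elements, each with its own type and length
```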


3.2.7.1 Lists can get complicated.

If you forget what is in your list, use the str() function to check out its structure. It will tell you the number of items in your list and their data types. Notice that R has chosen the data types of our vectors for us when we first instantiate them into mixed_list.

You can (and should) call str() on any R object. You can also try it on one of our vectors.


3.2.7.2 Accessing elements from a list with [[ ]] and [ ]

Accessing lists is much like opening up a box of boxes of chocolates. You never know what you're gonna get when you forget the structure!

You can access elements with a mixture of number and naming annotations:
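For instance, on a hypothetical list:

```r
mixed_list <- list(counts = c(4L, 8L, 15L), label = "experiment_1")

mixed_list["counts"]       # single [ ]: returns a smaller LIST
mixed_list[["counts"]]     # double [[ ]]: returns the element itself
mixed_list$counts          # $name: shorthand for [["name"]]
mixed_list[["counts"]][2]  # then index into the extracted vector: 8
```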


3.2.7.3 Use additional levels of [ ] notation to access sub-elements of a list

Unlike a data frame or array object, where we can access individual elements with simple [x, y, z] notation, we need to take an extra step to retrieve our list elements directly, at which point we can access them based on the appropriate notation. list objects are a container for other objects and are therefore agnostic to the nature of these objects.


3.2.8 Factors represent categorical data

Ah, the dreaded factors! A factor is a class of object used to encode a character vector into categories. They are mainly used to store categorical variables, and although it is tempting to think of them as character vectors, this is a dangerous mistake (you will get betrayed, badly!).

Factors make perfect sense if you are a statistician designing a programming language (!) but to everyone else they exist solely to torment us with confusing errors. A factor is really just an integer vector with an additional attribute, called levels, which defines the possible values the categories can take.

This is used by the R kernel to simplify the process of organizing data based on its categories and also restricts the labeling of data.

Why not just use character vectors, you ask?

Believe it or not, factors do have some useful properties. For example, factors allow you to specify all possible values a variable may take, even if those values are not in your data set. Think of a data validation drop-down list in Excel, which restricts a cell to a fixed set of entries.

We can directly convert a vector to a factor using the factor() function.
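A minimal sketch (the values mirror the character data used later in this section):

```r
taxa <- c("bacteria", "virus", "archaea")
taxa_factor <- factor(taxa)   # encode the character vector as categories
taxa_factor
# Levels: archaea bacteria virus
```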


3.2.8.1 Use levels() to access factors information

As we'll see later down the road, you may wish to know how many categories you are working with and what their labels are. You can access this information directly with the levels() function which will return a vector object.
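For example:

```r
taxa_factor <- factor(c("bacteria", "virus", "archaea"))
levels(taxa_factor)    # "archaea" "bacteria" "virus" — a character vector of the categories
nlevels(taxa_factor)   # 3 — a quick count of the categories
```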


3.2.8.2 Coerce your factor to an integer with as.integer()

That's right, under the hood a factor is just a fancy integer representation of your data, mapped to a set of categories. Thus we can cast or coerce it to an integer without much issue.
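For example:

```r
taxa_factor <- factor(c("bacteria", "virus", "archaea"))
as.integer(taxa_factor)   # 2 3 1 — each value's position in the (alphabetical) level set
```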


3.2.8.3 A brief note about R 4.x.x versus R 3.x.x

Since the inception of R, data.frame() calls have been used to create data frames, but the default behaviour was to convert character vectors (strings) to factors! This is a throwback to the original purpose of R, which was to perform statistical analyses on datasets with methods like ANOVA (lecture 06!) that examine the relationships between categorical variables (i.e. factors)!

As R has become more popular and its applications and packages have expanded, incoming users have been faced with remembering this obscure behaviour, leading to lost hours of debugging grief as they wondered why they couldn't pull information from their dataframes to do a simple analysis on C. elegans strain abundance via molecular inversion probes in datasets of multiplexed populations. #SuspiciouslySpecific

That meant that users usually had to create data frames including the toggle

data.frame(name=character(), value=numeric(), stringsAsFactors = FALSE)

3.2.8.4 The default behaviour of data.frame() creation does not make factors

Fret no more! As of R 4.x.x the default behaviour has switched and stringsAsFactors=FALSE is the default! Now if we want our characters to be factors, we must convert them specifically, or turn this behaviour on at the outset of creating each data frame!


3.2.8.5 Specify factors during data frame creation with stringsAsFactors or as.factor()

Depending on your needs, you can specify that all columns of strings be converted to factors with the stringsAsFactors parameter or you can coerce specific columns as factors when initializing them using the as.factor() function.
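A sketch of both routes (the layout of my_data_frame is assumed from the values used in this section):

```r
# Route 1: convert every character column at creation time
my_data_frame <- data.frame(character = c("bacteria", "virus", "archaea"),
                            stringsAsFactors = TRUE)
is.factor(my_data_frame$character)   # TRUE

# Route 2: coerce a specific column explicitly during initialization
my_data_frame_2 <- data.frame(character = as.factor(c("bacteria", "virus", "archaea")))
```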

If we look at the structure again, we still have 3 levels. This is because each unique character element has been encoded as a number. (Note that a column can be subset by index or by its name using the '$' operator.)


3.2.8.6 Factor levels are ordered alphabetically by default

Note that the first character object in the data frame is 'bacteria', however, the first factor level is archaea. R by default puts factor levels in alphabetical order. This can cause problems if we aren't aware of it.

Always check to make sure your factor levels are what you expect.

With factors, we can deal with our character levels directly, or their numeric equivalents. Factors are extremely useful for performing group calculations as we will see later in the course.


3.2.8.7 You can specify the order of your factor levels using the levels parameter

Look up the factor() function. Use it to make 'bacteria' the first level, 'virus' the second level, and 'archaea' the third level for the data frame 'my_data_frame'. Bonus if you can make the level numbers match (1,2,3 instead of 2,3,1). Use functions from the lesson to make sure your answer is correct.

Caution: By default, factor() will set your levels using all of the unique values in your vector. However, if you use the levels parameter, any values excluded from your levels vector will automatically be set to NA. We'll talk more about what NA values are in a bit!
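To illustrate that caution (hypothetical values):

```r
partial <- factor(c("bacteria", "virus", "archaea"),
                  levels = c("bacteria", "virus"))   # "archaea" is not listed
partial   # the third value becomes NA
```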

3.2.8.8 You can specify an order of precedence in your factor levels

For certain reasons/models that we will likely not cover in this course, you can make your factors ordered which means that there is an order of precedence. This inherent informational order can be used to your advantage when working with data.
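A minimal sketch with a hypothetical dosage variable:

```r
dosage <- factor(c("low", "high", "medium"),
                 levels = c("low", "medium", "high"),
                 ordered = TRUE)
dosage[1] < dosage[2]   # TRUE — ordered factors support comparisons
```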


3.2.8.9 You can relabel your factor values with the labels parameter but be careful!

Note that you can also label your factors when you make them. You need to be extremely careful with this. You may have good reasons to do this but remember that you are labeling the integer that is associated with the factor level after it has been converted. This is the equivalent of relabeling your data!

Let's see what that means!
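A sketch of the pitfall, using the values from this section's example:

```r
relabelled <- factor(c("bacteria", "virus", "archaea"),
                     labels = c("label_1", "label_2", "label_3"))
as.character(relabelled)
# "label_2" "label_3" "label_1" — the labels were matched to the alphabetical levels
```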


3.2.8.10 The factor() function applies a default level behaviour before applying the labels parameter

What just happened to our factor levels?

When we called factor(my_data_frame$character, labels = c('label_1', 'label_2', 'label_3')) there was an order of operations that occurred.

  1. factor() was used to cast the vector c('bacteria', 'virus', 'archaea') into a factor, and the levels were assigned in alphabetical order. In this case the default behaviour was equivalent to levels = c('archaea', 'bacteria', 'virus'). If we look back at the order of our vector, that makes it (2,3,1).
  2. Then we explicitly specify in our call to factor() to re-label those integer values with labels in this order: 1='label_1', 2='label_2', 3='label_3'.
  3. This gives the final result that our variable "character" in my_data_frame is now output as c('label_2', 'label_3', 'label_1'), which no longer matches our original data set.

Imagine if we had used the code labels = c('bacteria', 'virus', 'archaea')? It would relabel everything incorrectly. Give it a try yourself!

Now we'll apply our labels after explicit leveling.
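A sketch of the safe approach, with hypothetical replacement labels:

```r
fixed <- factor(c("bacteria", "virus", "archaea"),
                levels = c("bacteria", "virus", "archaea"),   # set the order first
                labels = c("Bacteria", "Virus", "Archaea"))   # then relabel
as.character(fixed)   # "Bacteria" "Virus" "Archaea" — the labels now line up with the data
```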

FactorsEverywhere.jpg

For the most part, factors are important for various statistics involving categorical variables, as you'll see for things like data visualizations (lecture 04) and linear models (lecture 06!). Love 'em or hate 'em, factors are integral to using R so better learn to live with them.


3.3.0 Mathematical operations on data frames and arrays

Yes, you can treat data frames and arrays like large lists where mathematical operations can be applied to individual elements or to entire columns or more!

First, let's take a look at our data frame


3.3.0.1 Mathematical operations are applied differently depending on data type

Therefore be careful to specify your numeric data for mathematical operations.
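For example (hypothetical data frame):

```r
df <- data.frame(gene = c("gene_a", "gene_b"), count = c(10, 20))
df$count * 2    # 20 40 — arithmetic works on the numeric column
# df$gene * 2   # would throw an error: arithmetic is not defined for character data
```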


3.3.1 Using the apply() function to perform actions across data structures

The above are illustrative examples to see how our different data structures behave. In reality, you will want to do calculations across rows and columns, and not on your entire matrix or data frame.

For example, we might have a count table where rows are genes, columns are samples, and we want to know the sum of all the counts for a gene. To do this, we can use the apply() function. apply() takes an array or matrix (or something that can be coerced to one, like a numeric data frame) and applies a function over rows or columns. The apply() function takes the following parameters:

- X, the array, matrix, or numeric data frame to operate on
- MARGIN, the dimension to apply the function over (1 = rows, 2 = columns)
- FUN, the function to apply

and returns a vector, array or list depending on the nature of X.

Let's practice by invoking the sum function.
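A sketch with a hypothetical counts table (genes as rows, samples as columns):

```r
counts <- data.frame(sample_1 = c(10, 0, 5, 11),
                     sample_2 = c(12, 1, 7, 2),
                     sample_3 = c(8, 0, 6, 4),
                     row.names = c("gene_a", "gene_b", "gene_c", "gene_d"))
gene_totals <- apply(counts, 1, sum)   # MARGIN = 1: apply sum() across each row
gene_totals   # a named numeric vector: gene_a 30, gene_b 1, gene_c 18, gene_d 17
```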

Note that the output is no longer a data frame. Since the resulting sums would have the dimensions of a 1x4 matrix, the results are instead coerced to a named numeric vector.


3.3.1.1 The apply() function will recognize basic functions.

Passing a function object: again you'll note above that we called upon the sum() function but wrote only sum when supplying it as a parameter to apply(). In this case, we are passing the function object itself by name; R looks the name up in memory and passes the function along to be used on each row or column of X. Writing sum() instead would evaluate that call immediately (with no arguments) and hand apply() the result rather than the function, causing an error.

When all data values are transformed, the output is a numeric matrix.

3.3.1.2 Supply a custom function to apply()

What if I want to know something else? We can create a function. The sum function we called before can also be written as a function taking in x (in this case the vector of values from our coerced data frame row by row) and summing them. Other functions can be passed to apply() in this way.

Use the apply() function to multiply the counts for each gene by 3.
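One possible sketch with a hypothetical counts table (note that when FUN returns a vector, apply() hands back the result with genes as columns, so t() restores the original shape):

```r
counts <- data.frame(sample_1 = c(10, 0), sample_2 = c(12, 1),
                     row.names = c("gene_a", "gene_b"))
# an anonymous function applied to each row, transposed back to genes-as-rows
tripled <- t(apply(counts, 1, function(x) x * 3))
tripled["gene_a", "sample_1"]   # 30
```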


3.3.2 Special data: NA and NaN values

Missing values in R are represented as NA (Not Available). Impossible values (like the result of dividing zero by zero) are represented by NaN (Not a Number). Both can be considered null values. These values, especially NAs, need special handling; otherwise they may lead to errors or misleading results in your functions.

First let's build an example containing NA values.
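A minimal sketch (the name na_vector matches the object used later in this section; the values are illustrative):

```r
na_vector <- c(1, 2, NA, 4, NA)   # a numeric vector with two missing values
mean(na_vector)                   # NA — a single missing value poisons the result
```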


3.3.2.1 Some functions can be told to ignore or remove NA values

Some mathematical functions can ignore NA value by setting the logical parameter na.rm = TRUE. Under the hood, if the function recognizes this parameter, it will remove the NA values before proceeding to perform its mathematical operation.
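For example:

```r
na_vector <- c(1, 2, NA, 4, NA)
sum(na_vector, na.rm = TRUE)    # 7 — the NAs are dropped before summing
mean(na_vector, na.rm = TRUE)   # the mean of 1, 2 and 4
```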


3.3.2.2 What happens when we try to use functions via apply() on data with NAs?

Now, I am going to take the earlier counts table and add a few NAs. If I now try to calculate the mean number of counts, I will get NA as an answer for the rows that had NAs.


3.3.2.3 Use the is.na() function to check your data

How do we find out ahead of time that we are missing data? Knowing is half the battle and is.na() can help us determine this with some data structures. The is.na() function can search through data structures and return a boolean structure of the same dimensions.

With a vector we can easily see how some basic functions work.
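For example:

```r
na_vector <- c(1, 2, NA, 4, NA)
is.na(na_vector)        # FALSE FALSE TRUE FALSE TRUE
sum(is.na(na_vector))   # 2 — TRUE counts as 1, so this tallies the missing values
```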


3.3.2.4 Find what you're looking for with the which() function

Using is.na() we were returned a logical vector of whether or not a value was NA. There are some ways we can apply this information through different functions but a useful method applicable to a vector of logicals is to ask which() positional indices return TRUE.

In our case, we use which() after checking for NA values in our object.
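For example:

```r
na_vector <- c(1, 2, NA, 4, NA)
which(is.na(na_vector))    # 3 5 — positions of the missing values
which(!is.na(na_vector))   # 1 2 4 — positions of the valid values
```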


3.3.2.5 The na.omit() function will remove NA entries

In addition to the combination of functions above, the na.omit() function returns an object with the NA entries deleted in a listwise manner: for a data frame, any row containing an NA (an incomplete case) is removed entirely. Keeping this in mind, you can also use it on a vector.
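On a vector:

```r
na_vector <- c(1, 2, NA, 4, NA)
clean <- na.omit(na_vector)   # keeps 1, 2, 4; an na.action attribute records what was dropped
as.numeric(clean)             # 1 2 4
```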


3.3.2.6 The any() function evaluates logical vectors

Sometimes we are just interested in knowing whether at least one of our logical values is TRUE. That is accomplished using the any() function, which can evaluate one or more logical vectors (or data frames) and returns TRUE if at least one value is TRUE.

We can use it to quickly ask if our data frame has any NA values. Recall our dataframe counts from before:
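A sketch, with counts_na standing in for the counts table after the NAs were added:

```r
counts_na <- data.frame(sample_1 = c(10, NA), sample_2 = c(12, 1))
any(is.na(counts_na))   # TRUE — at least one cell is missing
```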


3.3.2.7 Use complete.cases() to query larger objects

We have verified in many ways that we have at least one NA value in counts. Often we may wish to drop incomplete observations, where one or more variables is lacking data.

With a large data frame, it may be hard to look at every cell to tell if there are NAs. You may encounter data sets which are missing values or have NA values due to experimental limitations. How do you easily select out rows of invalid data from your analysis?

The function complete.cases() looks row by row to see whether any row contains an NA and returns a logical (boolean) vector with one entry per row of the dataframe. You can then subset out the rows with the NAs using conditional indexing.

Conditional indexing: That's right! We used conditional indexing above in section 3.3.2.4 to remove NA values from our na_vector. A data structure of booleans (TRUE and FALSE) can be used to select elements from within another data structure, as long as the relevant dimensions match!
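A sketch with a hypothetical counts_na data frame:

```r
counts_na <- data.frame(sample_1 = c(10, NA, 5), sample_2 = c(12, 1, NA),
                        row.names = c("gene_a", "gene_b", "gene_c"))
keep <- complete.cases(counts_na)    # TRUE FALSE FALSE — one logical per row
counts_clean <- counts_na[keep, ]    # conditional indexing drops the incomplete rows
nrow(counts_clean)                   # 1
```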

3.3.2.8 There are similar functions to handle other types of null values

We will focus more on the apply() family of functions later on in the course.

You can deal with NaNs in R in a similar way. NaNs (not a number) are NAs (not available), but NAs are not NaNs. NaNs appear for undefined numeric results (such as 0/0) or other unusual numeric values. Some packages may output NAs, NaNs, or Inf/-Inf (which can be found with is.finite()).

3.3.2.9 Consider just replacing the NAs with something useful

Basically, if you come across NaN's, you can use the same functions such as complete.cases() that you use with NAs.

Depending on your purpose, you may replace NAs with a sample average, or the mode of the data, or a value that is below a threshold.
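A sketch of mean imputation (one of several options, and only sensible for some analyses):

```r
na_vector <- c(1, 2, NA, 4, NA)
filled <- na_vector
filled[is.na(filled)] <- mean(na_vector, na.rm = TRUE)   # replace NAs with the sample mean
filled   # no NAs remain
```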


4.0.0 Installing and importing packages

Packages are groups of related functions that serve a purpose. They can be a series of functions to help analyse specific data or they could be a group of functions used to simplify the process of formatting your data (more on that in lecture 02).

Depending on their structure they may also rely on other packages.

4.1.0 Locating packages

There are a few different places from which you can install R packages. Listed in order of decreasing trustworthiness:

- CRAN, the official Comprehensive R Archive Network
- Bioconductor, a curated repository of bioinformatics packages
- GitHub, where anyone can post a package

Regardless where you download a package from, it's a good idea to document that installation, especially if you had to troubleshoot that installation (you'll eventually be there, I promise!)

devtools is a package that is used for developers to make R packages, but it also helps us to install packages from GitHub. It is downloaded from CRAN.


4.2.0 Installing packages for your Jupyter Notebook (on JupyterHub)

Installing packages through your JupyterHub notebook is relatively straightforward but any packages you install only remain during your current instance (login) of the hub. Whenever you logout from the JupyterHub, these installed libraries will essentially vaporize.

The install.packages() command will work just as it should in R and RStudio. Find instructions in the Appendix section for installation of packages into your own personal Anaconda-based installation of Jupyter Notebook.


4.2.1 Will it or won't it install? Check for dependencies!

R may give you package installation warnings. Don't panic. In general, your package will either be installed and R will test if the installed package can be loaded, or R will give you a non-zero exit status - which means your package was not installed. If you read the entire error message, it will give you a hint as to why the package did not install.

Some packages depend on previously developed packages and can only be installed after another package is installed in your library. Similarly, that previous package may depend on another package and so on. To solve this potential issue we use the dependencies logical parameter in our call.
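A sketch (not run here, since installation requires an internet connection):

```r
# dependencies = TRUE tells R to also install the packages this package depends on
install.packages("devtools", dependencies = TRUE)
```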


4.2.2 Use library() to load your packages after installation

A package only has to be installed once. It is now in your library. To use a package, you must load the package into memory. Unless this is one of the packages R loads automatically, you choose which packages to load every session.

library() takes a single argument. library() will throw an error if you try to load a package that is not installed. You may see require() on help pages, which also loads packages. It is usually used inside functions (it gives a warning instead of an error if a package is not installed).

Errors versus warnings: So far we've seen that errors will stop code from running. Warnings let the code continue to run, but they can signal problems that leave your code vulnerable to errors down the road.

4.2.2.1 Use lapply() to load several libraries simultaneously using a vector of package names

myPackages <- c("package_1", "package_2", "package_3")

lapply(myPackages, library, character.only = TRUE)

4.2.2.2 Installing packages from Bioconductor requires BiocManager

To install from Bioconductor, first install the BiocManager package (from CRAN), then use BiocManager::install() to pull down and install packages from the Bioconductor repository.
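A sketch (not run here, since installation requires an internet connection):

```r
install.packages("BiocManager")   # BiocManager itself lives on CRAN
BiocManager::install("limma")     # then fetches limma from the Bioconductor repository
```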


4.2.2.3 Skip loading a library with package::function()

As mentioned above, devtools is required to install from GitHub. We don't actually need to load the entire library for devtools if we are only going to use one function. We select a function using this syntax package::function().

All packages are loaded the same regardless of their origin, using library().

Now, install and load the packages tidyverse and limma.


5.0.0 Read and write files

Now for the last section of our lecture, it is extremely useful to know how to read and write files. Inevitably you'll want to use R to analyse your (or someone else's) data and save those results somewhere. Over the course of this lecture series we'll see a few different methods of data import and export but we'll touch on some basics right now.

5.1.0 Writing files in R

Note on R built-in data sets: Let's explore the iris data set that comes with R (this and many other data sets were automatically installed when you installed R). Run the data() function to see all available data sets.

Note on data set formatting: R works best when each column is a single variable (length, size, pressure) and each row contains one and only one observation per variable. Thus, columns are called "variables" and rows are called "observations". We will go over proper format for data sets in lecture 3. Let's explore the iris data set:
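For example:

```r
data(iris)    # load the built-in data set into your environment
str(iris)     # 150 observations of 5 variables
head(iris)    # peek at the first six rows
```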

5.1.1 R will overwrite files without warning you!

Make sure that there are no other files with the same name in your working directory, because they'll be overwritten without warning (and will be gone forever, unless you have backups, which you should ALWAYS have for important files!). Let's write iris as a comma separated (csv) file to your working directory with write.csv().

Write iris as a comma separated (csv) file wherever you want on your computer without changing your working directory:
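A sketch of both variants (the filenames are hypothetical):

```r
# Writes into the current working directory
write.csv(iris, file = "iris_dataset.csv", row.names = FALSE)

# A full path writes the file elsewhere without changing the working directory, e.g.:
# write.csv(iris, file = "~/Documents/iris_dataset.csv", row.names = FALSE)
```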


5.2.0 Reading files in R

There are a number of methods or functions that we can use to read in files. Many of these functions are targeted for specific types of data formats but one you might use often is the read.csv() command. I personally use read.table() quite often as well.
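A round-trip sketch with a hypothetical filename: write a csv, then read it back in.

```r
write.csv(iris, file = "iris_roundtrip.csv", row.names = FALSE)
iris_in <- read.csv("iris_roundtrip.csv")
dim(iris_in)   # 150 rows, 5 columns
# read.csv() is read.table() preset with sep = "," and header = TRUE
```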

5.2.1 A preview into next week's lecture

Next week we'll talk more about your data and quick ways to manipulate and tidy it up some more. For now let's quickly look at the functions head() and tail()

5.2.2 A preview even further down the road

By the end of this course, these commands will seem like second nature to you but we won't learn some of them until lecture 04 (plot() and boxplot()) and even lecture 06 (qqnorm() and qqline()).

Still, let's try some commands on our dataset to get a feel for your future!


6.0.0 Class summary

That's a wrap for our first class on R! You've made it through and we've learned about the following:

  1. Best practices in R.
  2. Basic functions in R.
  3. Variables, data types and data structures.
  4. Installing packages.
  5. Reading and Writing files in R.

6.1.0 Post-lecture assessment (12% of final grade)

Soon after the end of each lecture, a homework assignment will be available for you in DataCamp. Your assignment is to complete all chapters from the Introduction to R course, which has a total of 6200 points. This is a pass-fail assignment, and in order to pass you need to achieve at least 4,650 points (75%) of the total possible points. Note that when you take hints from the DataCamp chapter, it will reduce your total earned points for that chapter.

In order to properly assess your progress on DataCamp, at the end of each chapter, please take a screenshot of the entire course summary. You'll see this under the "Course Outline" menubar seen at the top of the page for each course and you'll want to expand each section. It should look something like this:

DataCampIntroR.jpg

You may need to take several screenshots if you cannot print it all in a single try. Submit the file(s) or a combined PDF for the homework to the assignment section of Quercus. By submitting your scores for each section, and chapter, we can keep track of your progress, identify knowledge gaps, and produce a standardized way for you to check on your assignment "grades" throughout the course.

You will have until 13:59 hours on Thursday, September 23rd to submit your assignment (right before the next lecture).


6.2.0 Acknowledgements

Revision 1.0.0: materials prepared in R Markdown by Oscar Montoya, M.Sc. Bioinformatician, Education and Outreach, CAGEF.

Revision 1.1.0: edited and prepared in Jupyter Notebook by Calvin Mok, Ph.D. Bioinformatician, Education and Outreach, CAGEF.


6.3.0 Your DataCamp academic subscription

This class is supported by DataCamp, the most intuitive learning platform for data science and analytics. Learn any time, anywhere and become an expert in R, Python, SQL, and more. DataCamp’s learn-by-doing methodology combines short expert videos and hands-on-the-keyboard exercises to help learners retain knowledge. DataCamp offers 350+ courses by expert instructors on topics such as importing data, data visualization, and machine learning. They’re constantly expanding their curriculum to keep up with the latest technology trends and to provide the best learning experience for all skill levels. Join over 6 million learners around the world and close your skills gap.

Your DataCamp academic subscription grants you free access to DataCamp's catalog for 6 months from the beginning of this course. You are free to look for additional tutorials and courses to help grow your skills for your data science journey. Learn more (literally!) at DataCamp.com.

DataCampLogo.png


7.0.0 Resources

http://archive.ics.uci.edu/ml/datasets/Adult
https://github.com/patrickwalls/R-examples/blob/master/LinearAlgebraInR.Rmd
http://stat545.com/block002_hello-r-workspace-wd-project.html
http://stat545.com/block026_file-out-in.html
http://sisbid.github.io/Module1/
https://github.com/tidyverse/readxl
https://cran.r-project.org/doc/manuals/r-release/R-intro.pdf
https://swcarpentry.github.io/r-novice-inflammation/06-best-practices-R/
http://stat545.com/help-general.html
https://stackoverflow.com/help/how-to-ask
https://www.r-project.org/posting-guide.html
http://www.quantide.com/ramarro-chapter-07/

7.1.0 Summary of what we learned today:

7.2.0 Post-Lesson Assessment

Complete Introduction to R assignment on DataCamp

7.3.0 Preparation for next time

If you'll be running your own installation of Jupyter Notebook or R/RStudio, please install the following packages from the conda-forge channel (or CRAN) for next time:


8.0.0 Appendix I: Installing your own copy of R

8.1.0 Jupyter Notebooks and the R kernel

For this introductory course we will be teaching and running code for R through Jupyter notebooks. In this section we will discuss

  1. Installation of Jupyter (through Anaconda)
  2. Updating the default R package
  3. Starting up your Jupyter notebooks

8.1.1 Installing R and Jupyter Notebooks (via Anaconda3)

As of 2021-01-18, the latest version of Anaconda3 runs with Python 3.8

Download the OS-appropriate version from here https://www.anaconda.com/products/individual

8.1.2 Updating the base version of R

As of 2020-12-11, the latest version of r-base available for Anaconda is 4.0.3 but Anaconda comes pre-installed with R 3.6.1. To save time, we will update just our r-base (version) through the command line using the Anaconda prompt. You'll need to find the menu shortcut to the prompt in order to run these commands. Before class you should update all of your anaconda packages. This will be sure to get you the latest version of Jupyter notebook. Open up the Anaconda prompt and type the following command:

conda update --all

It will ask permission to continue at some point. Say 'yes' to this. After this is completed, use the following command:

conda install -c conda-forge/label/main r-base=4.0.3=hddad469_3

Anaconda will try to install a number of R-related packages. Say 'yes' to this.

8.1.3 Loading the R-kernel for your Jupyter notebook

Lastly, we want to connect your R version to the Jupyter notebook itself. Type the following command:

conda install -c r r-irkernel

Jupyter should now have R integrated into it. No need to build an extra environment to run it.

8.1.3.1 A quick note about Anaconda environments

You may find that for some reason or another, you'd like to maintain a specific R-environment (or other) to work in. Environments in Anaconda work like isolated sandbox versions of Anaconda within Anaconda. When you generate an environment for the first time, it will draw all of its packages and information from the base version of Anaconda - kind of like making a copy. You can also create these in the Anaconda prompt. You can even create new environments based on specific versions or installations of other programs. For instance, we could have tried to make an environment for R 4.0.3 with the command

conda create -n my_R_env -c conda-forge/label/main r-base=4.0.3=hddad469_3

This would create a new environment with version 4.0.3 of R but the base version of Anaconda would retain version 3.6.1 of R. A small but helpful detail if you are unsure about newer versions of packages that you'd like to use.

Likewise, you can update and install packages in new environments without affecting or altering your base environment! Again it's helpful if you're upgrading or installing new packages and programs. If you're not sure how it will affect what you already have in place, you can just install them straight into an environment.

For more information: https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#cloning-an-environment

8.1.3.2 Using the Anaconda navigator to make a Jupyter notebook

If you are inclined, the Anaconda Navigator can help you make an R environment separate from the base, but you won't be able to perform the same fancy tricks as in the prompt, like installing new packages directly to a new environment.

Note: You should consider doing this only if you have a good reason to isolate what you're doing in R from the Anaconda base packages. You will also need to have installed r-base 4.0.3 to make a new environment with it through the Anaconda navigator.

The Anaconda navigator is a graphical interface that shows all of your pre-installed packages and gives you access to installing other common programs like RStudio (we'll get to that in a moment).

You will now have an R environment where you can install specific R packages that won't make their way into your Anaconda base.

You will likely find a shortcut to this environment in your (Windows) menu under the Anaconda folder. It will look something like Jupyter Notebook (R-4-0-3)

8.1.3.3 Installing packages for your personal Jupyter Notebook

Normally I suggest avoiding installing packages through your Jupyter Notebook. Instead, if you want to update your R packages for running Jupyter, it's best to add them through either the Anaconda prompt or Anaconda navigator. Again, using the prompt gives you more options but can seem a little more complicated.

One of the most useful packages to install for R is r-essentials. Open up the Anaconda prompt and use the command: conda install -c r r-essentials. After running, the Anaconda prompt will inform you of any package dependencies and it will identify which packages will be updated, newly installed, or removed (unlikely).

Anaconda has multiple channels (similar to repositories) that are maintained by different groups. These channels port regular R packages over to a format that can be installed in Anaconda and run by R. The two main channels you'll find useful for this are the r channel and the conda-forge channel. You can find more information about all of the packages on docs.anaconda.com. As you might have guessed, the basic format for installing packages is: conda install -c channel-name r-package, where:

- conda install is the call to install packages; this can be done in the base or a custom environment
- -c channel-name identifies the specific channel to install from
- r-package is the name of your package; most R package names begin with r-, e.g. r-ggplot2


8.2.0 R and RStudio

8.2.1 Installing R

As of 2020-06-25, the latest stable R version is 4.0.3:

Windows:

- Go to <http://cran.utstat.utoronto.ca/>      
- Click on 'Download R for Windows'     
- Click on 'install R for the first time'     
- Click on 'Download R 4.0.3 for Windows' (or a newer version)     
- Double-click on the .exe file once it has downloaded and follow the instructions.

(Mac) OS X:

- Go to <http://cran.utstat.utoronto.ca/>      
- Click on 'Download R for (Mac) OS X'     
- Click on R-4.0.3.pkg (or a newer version)     
- Open the .pkg file once it has downloaded and follow the instructions.


Linux:

- Open a terminal (Ctrl + alt + t)
- sudo apt-get update     
- sudo apt-get install r-base     
- sudo apt-get install r-base-dev (so you can compile packages from source)


8.2.2 Installing RStudio

As of 2021-01-18, the latest RStudio version is 1.4.1103

Windows:

- Go to <https://www.rstudio.com/products/rstudio/download/#download>     
- Click on 'RStudio 1.3.1093 - Windows Vista/7/8/10' to download the installer (or a newer version)     
- Double-click on the .exe file once it has downloaded and follow the instructions.

(Mac) OS X:

- Go to <https://www.rstudio.com/products/rstudio/download/#download>     
- Click on 'RStudio 1.3.1093 - Mac OS X 10.13+ (64-bit)' to download the installer (or a newer version)     
- Double-click on the .dmg file once it has downloaded and follow the instructions.     


Linux:

- Go to <https://www.rstudio.com/products/rstudio/download/#download>     
- Click on the installer that describes your Linux distribution, e.g. 'RStudio 1.3.1093 - Ubuntu 18/Debian 10(64-bit)' (or a newer version)     
- Double-click on the .deb file once it has downloaded and follow the instructions.     
- If double-clicking on your .deb file did not open the software manager, open the terminal (Ctrl + alt + t) and type **sudo dpkg -i /path/to/installer/rstudio-xenial-1.3.959-amd64.deb**

 _Note: You have 3 things that could change in this last command._     
 1. This assumes you have just opened the terminal and are in your home directory. (If not, you have to modify your path. You can get to your home directory by typing cd ~.)     
 2. This assumes you have downloaded the .deb file to Downloads. (If you downloaded the file somewhere else, you have to change the path to the file, or download the .deb file to Downloads).      
 3. This assumes your file name for .deb is the same as above. (Put the name matching the .deb file you downloaded).

If you have a problem with installing R or RStudio, you can also try to solve the problem yourself by Googling any error messages you get. You can also try to get in touch with me or the course TAs.


8.2.3 Getting to know the RStudio environment

RStudio is an IDE (Integrated Development Environment) for R that provides a more user-friendly experience than using R in a terminal setting. It has 4 main areas or panes, which you can customize to some extent under Tools > Global Options > Pane Layout:

  1. Source - The code you are annotating and keeping in your script.
  2. Console - Where your code is executed.
  3. Environment - What global objects you have created and functions you have written/sourced.
    History - A record of all the code you have executed in the console.
    Connections - Which data sources you are connecting to. (Not being used in this course.)
  4. Files, Plots, Packages, Help, Viewer - self-explanatoryish if you click on their tabs.

All of the panes can be minimized or maximized using the large and small box outlines in the top right of each pane.

R_studio_default_layout.jpg

8.2.3.1 Source

The Source is where you keep the code and annotation that you want saved as your script. The tab at the top left of the pane shows your script name (e.g. 'Untitled.R'), and you can switch between scripts by toggling the tabs. You can save, search, or publish your source code using the buttons along the pane header. Code in the Source pane is not run automatically; you must execute it yourself.

To run your current line of code or a highlighted segment of code from the Source pane you can:
a) click the button 'Run' -> 'Run Selected Line(s)',
b) click 'Code' -> 'Run Selected Line(s)' from the menu bar,
c) use the keyboard shortcut Ctrl + Enter (Windows & Linux) or Command + Enter (Mac) (recommended),
d) copy and paste your code into the Console and hit Enter (not recommended).

There are always many ways to do things in R, but the fastest way will always be the option that keeps your hands on the keyboard.

8.2.3.2 Console

You can also type and execute your code (by hitting Enter) in the Console when the > prompt is visible. If you enter code and see a + instead of the prompt, R doesn't think you are finished entering code (e.g. you might be missing a closing bracket). If this isn't immediately fixable, you can press Esc to get back to your prompt. Using the up and down arrow keys, you can scroll through previous commands in the Console if you want to rerun code or fix an error resulting from a typo.

On the Console tab in the top left of that pane is your current working directory. Pressing the arrow next to your working directory will open your current folder in the Files pane. If you find your Console is getting too cluttered, selecting the broom icon in that pane will clear it for you. The Console also shows information: upon start up about R (such as version number), during the installation of packages, when there are warnings, and when there are errors.

8.2.3.3 Environment

In the Global Environment you can see all of the stored objects you have created or sourced (imported from another script). The Global Environment can become cluttered, so it also has a broom button to clear its workspace.

Objects are made by using the assignment operator <-. On the left side of the arrow, you have the name of your object. On the right side you have what you are assigning to that object. In this sense, you can think of an object as a container. The container holds the values given as well as information about 'class' and 'methods' (which we will come back to).

Type x <- c(2,4) in the Console followed by Enter. For one-dimensional objects, the data type and the first few values are shown immediately in the Environment pane. Now type y <- data.frame(numbers = c(1,2,3), letters = c("a","b","c")) followed by Enter. The dimensions of 2D objects are shown immediately, and you can inspect the structure of data frames and lists (more later) by clicking on the object's arrow. Clicking on the object's name will open it to view in a new tab. Custom functions created in the session or sourced will also appear in this pane.
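The commands above can also be run as one block, together with two base R inspection helpers, class() and str(), which report an object's type and structure:

```r
x <- c(2, 4)                                  # a 1D numeric vector
class(x)                                      # "numeric"

y <- data.frame(numbers = c(1, 2, 3),
                letters = c("a", "b", "c"))   # a 2D data frame
str(y)                                        # structure: 3 obs. of 2 variables
```

These helpers print the same information in the Console that the Environment pane shows you graphically.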

The Environment pane dropdown displays all of the currently loaded packages in addition to the Global Environment. Loaded means that all of the tools/functions in the package are available for use. R comes with a number of packages pre-loaded (e.g. base, grDevices).

In the History tab are all of the commands you have executed in the Console during your session. You can select a line of code and send it to the Source or Console.

The Connections tab is to connect to data sources such as Spark and will not be used in this lesson.

8.2.3.4 Files, Plots, Packages, Help, Viewer

The Files tab allows you to search through directories; you can go to or set your working directory by making the appropriate selection under the More (blue gear) drop-down menu. The ... to the top left of the pane allows you to search for a folder in a more traditional manner.

The Plots tab is where plots you make in a .R script will appear (notebooks and markdown plots will be shown in the Source pane). There is the option to Export and save these plots manually.

The Packages tab has all of the packages that are installed and their versions, and buttons to Install or Update packages. A check mark in the box next to the package means that the package is loaded. You can load a package by adding a check mark next to a package, however it is good practice to instead load the package in your script to aid in reproducibility.
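For example (using ggplot2 as a stand-in for whichever package you need), the reproducible pattern is to install once and then load the package at the top of each script that uses it:

```r
# Install once - downloads the package from CRAN:
install.packages("ggplot2")

# Load in your script - this is what the check mark
# in the Packages tab does behind the scenes:
library(ggplot2)
```

Keeping the library() call in the script means anyone rerunning it gets the same packages loaded, without having to remember which boxes to tick.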

The Help menu has the documentation for all packages and functions. For each function you will find a description of what the function does, the arguments it takes, what the function does to the inputs (details), what it outputs, and an example. Some of the help documentation is difficult to read or less than comprehensive, in which case Googling the function is a good idea.
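You can also pull up a function's help page straight from the Console; for example, for the base function mean():

```r
?mean          # opens the help page for mean() in the Help tab
help("mean")   # equivalent, using the help() function
```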

The Viewer will display vignettes, or local web content such as a Shiny app, interactive graphs, or a rendered html document.

8.2.3.5 Global Options

I suggest you take a look at Tools -> Global Options to customize your experience.

For example, under Code -> Editing I have selected Soft-wrap R source files followed by Apply so that my text will wrap by itself when I am typing and not create a long line of text.

You may also want to change the Appearance of your code. I like the RStudio theme: Modern and Editor font: Ubuntu Mono, but pick whatever you like! Again, you need to hit Apply to make changes.

That whirlwind tour isn't everything the IDE can do, but it is enough to get started.

9.0.0 Appendix II: A quick note on GNU-Linux directory structure and navigation

This is an image of a possible directory.

unix_boys.jpeg

In this hierarchy we will pretend to be benedict, and we are hanging out in our Tables folder. R looks to read in your files from your working directory, which in this case would be Tables. At this moment, R would have access to proof.tsv and genes.csv. If I tried to open paper.txt under benedict, R would tell me there is no such file in my current working directory.

To get your working directory in R you would type in your code cell:

getwd()

You would then press Ctrl+Enter (Windows & Linux) or Command+Enter (Mac) to execute the code in the cell. The output below the cell would be:

'/home/benedict/Tables'

getwd() always returns an absolute path: a path starting from the root "/". Absolute paths vary from computer to computer; my home directory and your home directory are not the same, because our user names differ in the path.

To move between directories, it helps to know a few shortcuts: '.' is your current directory, '..' is one directory level up, and '~' is your home directory (a shortcut for "/home/benedict"). Our current location could therefore also be written as "~/Tables".
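These shortcuts work anywhere a path is expected; a quick sketch:

```r
getwd()        # where am I? e.g. "/home/benedict/Tables"
setwd("..")    # move up one level, to "/home/benedict"
setwd("~")     # jump straight to the home directory
```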

To move to the directory ewan we use a function that will set the working directory:

setwd("/home/ewan")

or, using a relative path from Tables:

setwd("../../ewan")

(Note that setwd("~/ewan") would not work here: since ~ expands to "/home/benedict", it would point to "/home/benedict/ewan".)

A relative directory is a path starting from wherever you currently are (AKA your working directory). This path could be the same on your computer and my computer if and only if we have the same directory structure.

If I wanted to move back to Tables using the absolute path, I would set a new working directory:

setwd("/home/benedict/Tables")

or

setwd("~/Tables")

And the relative path would be:

setwd("../benedict/Tables")

There is some debate about setting working directories within scripts. Obviously, not everyone has the same absolute path, so if you must set a directory in your script, it is best to use a relative path starting from the folder your script is in. Keep in mind that others you share your script with might not have the same directory structure if you refer to sub-directories.
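As a sketch (assuming the directory layout above, with the script saved in benedict's home folder next to Tables):

```r
# Fragile: this absolute path only exists on benedict's computer
# setwd("/home/benedict/Tables")

# Portable: a relative path from the script's own folder works for
# anyone who keeps the same sub-directory structure
setwd("./Tables")
genes <- read.csv("genes.csv")
```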

You can set your working directory by:

  1. setwd()

In RStudio you may also...

  1. Session -> Set Working Directory (3 Options)
  2. Files Window -> More (Gear Symbol) -> Set As Working Directory

CAGEF_new.png